Multi-Modal Imitation Learning from Unstructured Demonstrations using Generative Adversarial Nets
Imitation learning has traditionally been applied to learn a single task from
demonstrations thereof. The requirement of structured and isolated
demonstrations limits the scalability of imitation learning approaches as they
are difficult to apply to real-world scenarios, where robots have to be able to
execute a multitude of tasks. In this paper, we propose a multi-modal imitation
learning framework that is able to segment and imitate skills from unlabelled
and unstructured demonstrations by jointly learning skill segmentation and
imitation. Extensive simulation results indicate that our method can
efficiently separate the demonstrations into individual skills and learn to
imitate them using a single multi-modal policy. The video of our experiments is
available at http://sites.google.com/view/nips17intentiongan
Comment: Paper accepted to NIPS 2017
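As a rough illustration of the kind of objective this abstract describes (a hypothetical sketch, not the authors' code), one can condition a GAIL-style policy on a latent intention code and train an auxiliary classifier to recover that code from generated state-action pairs; keeping the code recoverable is what pushes the single multi-modal policy to separate the unstructured demonstrations into distinct skills. All module names, sizes, and the exact loss weighting below are assumptions.

```python
# Hypothetical latent-intention imitation sketch (illustrative only).
import torch
import torch.nn as nn
import torch.nn.functional as F

S_DIM, A_DIM, N_INTENTIONS = 10, 4, 3  # illustrative sizes

# Policy is conditioned on a categorical intention code c (one-hot appended to the state).
policy = nn.Sequential(nn.Linear(S_DIM + N_INTENTIONS, 64), nn.Tanh(), nn.Linear(64, A_DIM))
# GAN-style discriminator over (state, action) pairs.
disc = nn.Sequential(nn.Linear(S_DIM + A_DIM, 64), nn.Tanh(), nn.Linear(64, 1))
# Auxiliary classifier q(c | s, a) used as the skill-segmentation signal.
intention_head = nn.Sequential(nn.Linear(S_DIM + A_DIM, 64), nn.Tanh(), nn.Linear(64, N_INTENTIONS))

d_opt = torch.optim.Adam(list(disc.parameters()) + list(intention_head.parameters()), lr=3e-4)

def discriminator_update(expert_sa, policy_sa, policy_c):
    """One GAN-style update: expert pairs vs. policy pairs, plus intention classification."""
    d_expert = disc(expert_sa)
    d_policy = disc(policy_sa)
    gan_loss = (F.binary_cross_entropy_with_logits(d_expert, torch.ones_like(d_expert)) +
                F.binary_cross_entropy_with_logits(d_policy, torch.zeros_like(d_policy)))
    # Encourage the latent intention to remain recoverable from policy behavior,
    # which separates the policy's modes into individual skills.
    info_loss = F.cross_entropy(intention_head(policy_sa), policy_c)
    d_opt.zero_grad()
    (gan_loss + info_loss).backward()
    d_opt.step()

def imitation_reward(policy_sa, policy_c):
    """Reward for the RL step: fool the discriminator and keep the intention recoverable."""
    with torch.no_grad():
        r_gan = -F.logsigmoid(-disc(policy_sa)).squeeze(-1)   # equivalent to -log(1 - D)
        r_info = -F.cross_entropy(intention_head(policy_sa), policy_c, reduction='none')
    return r_gan + r_info
```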
Combining Model-Based and Model-Free Updates for Trajectory-Centric Reinforcement Learning
Reinforcement learning (RL) algorithms for real-world robotic applications
need a data-efficient learning process and the ability to handle complex,
unknown dynamical systems. These requirements are handled well by model-based
and model-free RL approaches, respectively. In this work, we aim to combine the
advantages of these two types of methods in a principled manner. By focusing on
time-varying linear-Gaussian policies, we enable a model-based algorithm based
on the linear quadratic regulator (LQR) that can be integrated into the
model-free framework of path integral policy improvement (PI2). We can further
combine our method with guided policy search (GPS) to train arbitrary
parameterized policies such as deep neural networks. Our simulation and
real-world experiments demonstrate that this method can solve challenging
manipulation tasks with comparable or better performance than model-free
methods while maintaining the sample efficiency of model-based methods. A video
presenting our results is available at
https://sites.google.com/site/icml17pilqr
Comment: Paper accepted to the International Conference on Machine Learning
(ICML) 2017
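Very roughly, the combination described above can be pictured as two ingredients operating on a time-varying linear-Gaussian controller: an LQR backward pass on fitted linear dynamics (the model-based part) and a path-integral-style reweighting of sampled rollouts by their residual cost (the model-free part). The sketch below is a hedged illustration of those two ingredients, not the authors' implementation; the function names, the simple quadratic cost, and the way the residual cost is defined are all assumptions.

```python
# Illustrative sketch of LQR + PI2-style reweighting (not the paper's code).
import numpy as np

def lqr_gains(A, B, Q, R, T):
    """Model-based part: finite-horizon LQR feedback gains for x_{t+1} = A x_t + B u_t
    with stage cost x'Qx + u'Ru, computed by a backward Riccati recursion."""
    P = Q.copy()
    Ks = []
    for _ in range(T):
        K = -np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A + B @ K)
        Ks.append(K)
    return Ks[::-1]  # gains ordered from t = 0 to T-1

def pi2_correction(sampled_controls, residual_costs, temperature=1.0):
    """Model-free part: reweight sampled control sequences by the exponentiated
    negative residual cost (path-integral style) and return their weighted mean."""
    costs = np.asarray(residual_costs, dtype=float)
    w = np.exp(-(costs - costs.min()) / temperature)  # subtract min for numerical stability
    w /= w.sum()
    controls = np.asarray(sampled_controls)           # shape (num_samples, T, u_dim)
    return np.einsum('n,ntu->tu', w, controls)
```

In this toy picture, the LQR pass handles the portion of the cost that the fitted linear dynamics explain well, while the sample-based correction absorbs whatever the model misses, which is the intuition behind combining the two update types.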
Learning to Interactively Learn and Assist
When deploying autonomous agents in the real world, we need effective ways of
communicating objectives to them. Traditional skill learning has revolved
around reinforcement and imitation learning, each with rigid constraints on the
format of information exchanged between the human and the agent. While scalar
rewards carry little information, demonstrations require significant effort to
provide and may carry more information than is necessary. Furthermore, rewards
and demonstrations are often defined and collected before training begins, when
the human is most uncertain about what information would help the agent. In
contrast, when humans communicate objectives with each other, they make use of
a large vocabulary of informative behaviors, including non-verbal
communication, and often communicate throughout learning, responding to
observed behavior. In this way, humans communicate intent with minimal effort.
In this paper, we propose such interactive learning as an alternative to reward
or demonstration-driven learning. To accomplish this, we introduce a
multi-agent training framework that enables an agent to learn from another
agent who knows the current task. Through a series of experiments, we
demonstrate the emergence of a variety of interactive learning behaviors,
including information-sharing, information-seeking, and question-answering.
Most importantly, we find that our approach produces an agent that is capable
of learning interactively from a human user, without a set of explicit
demonstrations or a reward function, and achieving significantly better
performance cooperatively with a human than a human performing the task alone.
Comment: AAAI 2020. Video overview at https://youtu.be/8yBvDBuAPrw, paper
website with videos and interactive game at
http://interactive-learning.github.io
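A toy view of the multi-agent setup described above (purely illustrative; the environment interface, policy objects, and names are hypothetical placeholders, not the authors' framework) is a shared episode in which a task-aware "teacher" and a task-unaware "learner" act jointly and receive the same task reward, so that information-sharing and information-seeking behaviors can pay off when both agents are trained to maximize it.

```python
# Hypothetical shared-reward episode loop for an interactive-learning setup.
def run_episode(env, teacher_policy, learner_policy):
    obs_teacher, obs_learner = env.reset()      # the teacher's observation includes the task
    total_reward = 0.0
    done = False
    while not done:
        a_teacher = teacher_policy.act(obs_teacher)  # may demonstrate, gesture, or answer
        a_learner = learner_policy.act(obs_learner)  # only sees the teacher's behavior
        (obs_teacher, obs_learner), reward, done = env.step(a_teacher, a_learner)
        total_reward += reward                       # one shared reward couples the agents
    return total_reward
```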